Perceptual quality-preserving black-box attack against deep learning image classifiers

Authors

Abstract

Deep neural networks provide unprecedented performance in all image classification problems, including biometric recognition systems, key elements of smart city environments. Recent studies, however, have shown their vulnerability to adversarial attacks, spawning intense research in this field. To improve system security, new countermeasures and stronger attacks are proposed by the day. On the attacker’s side, there is growing interest in the realistic black-box scenario, in which the user has no access to the network parameters. The problem is to design efficient attacks that mislead the classifier without compromising image quality. In this work, we propose to perform the attack along a high-saliency and low-distortion path, so as to improve both attack efficiency and perceptual quality. Experiments on real-world systems prove the effectiveness of the approach, both on benchmark tasks and in actual applications.
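The idea of restricting a black-box attack to high-saliency pixels under a distortion budget can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual algorithm: saliency is approximated by pixel magnitude, the classifier is a toy linear model standing in for a remote black-box API, and the search is plain random perturbation under an L∞ bound `eps`.

```python
import numpy as np

def saliency_guided_attack(img, query_fn, eps=0.3, n_queries=200, seed=0):
    """Illustrative random-search sketch (not the paper's algorithm):
    perturb only the most salient pixels, keep the L-inf distortion
    below eps, and stop as soon as the black-box label flips."""
    rng = np.random.default_rng(seed)
    orig_label = query_fn(img)
    # Saliency proxy: pixel magnitude. A real attack would estimate
    # saliency from queries or from a surrogate model.
    k = max(1, img.size // 2)
    top = np.argsort(np.abs(img))[-k:]      # indices of most salient pixels
    for _ in range(n_queries):
        step = np.zeros(img.size)
        step[top] = rng.uniform(-eps, eps, size=k)
        cand = img + step                   # distortion bounded by eps
        if query_fn(cand) != orig_label:
            return cand, True               # label flipped: attack succeeded
    return img, False                       # budget exhausted
```

Restricting edits to the salient support keeps most pixels untouched, which is what preserves perceptual quality while the query budget is spent only where it matters.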



Related articles

Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers

Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to black-box attacks, which are more realistic scenarios. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that forces a deep-learning classifier to misclassify a text input. ...
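The flavor of such black-box text perturbations can be sketched with a greedy character-substitution loop. This is a toy illustration in the spirit of the approach, not the DeepWordBug algorithm itself (which scores tokens before perturbing them); the keyword-based classifier in the test is hypothetical.

```python
def char_flip_attack(text, classify):
    """Greedy sketch of a black-box character-level attack: try replacing
    each character with '*', keep the edit only if it flips the black-box
    label, and revert it otherwise."""
    orig_label = classify(text)
    chars = list(text)
    for i in range(len(chars)):
        old = chars[i]
        chars[i] = '*'
        if classify(''.join(chars)) != orig_label:
            return ''.join(chars), True   # one-character edit fooled the model
        chars[i] = old                    # revert edits that do not help
    return text, False
```

Because each candidate needs only one query to the classifier, the attacker never touches model parameters, matching the black-box threat model described above.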


Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

Deep neural networks (DNNs) are used to solve complex classification problems, for which other machine learning classifiers, such as SVM, fall short. Recurrent neural networks (RNNs) have been used for tasks that involve sequential inputs, such as speech to text. In the cyber security domain, RNNs based on API calls have been used effectively to classify previously un-encountered malware. In t...


Query-limited Black-box Attacks to Classifiers

We study black-box attacks on machine learning classifiers where each query to the model incurs some cost or risk of detection to the adversary. We focus explicitly on minimizing the number of queries as a major objective. Specifically, we consider the problem of attacking machine learning classifiers subject to a budget of feature modification cost while minimizing the number of queries, where...


Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge...
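The transfer-based strategy behind such attacks can be sketched compactly: train a local substitute model on labels queried from the black box, then craft an FGSM-style perturbation on the substitute and hope it transfers. This is a bare-bones sketch under strong simplifying assumptions: the substitute here is a tiny logistic regression rather than a DNN, and the black box in the test is a hypothetical linear classifier.

```python
import numpy as np

def train_substitute(query_fn, x_seed, lr=0.5, epochs=200):
    """Fit a logistic-regression substitute to black-box labels;
    a stand-in for the substitute network used in transfer attacks."""
    y = np.array([query_fn(x) for x in x_seed], dtype=float)
    w, b = np.zeros(x_seed.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x_seed @ w + b)))  # substitute predictions
        err = p - y                                  # logistic-loss gradient factor
        w -= lr * x_seed.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

def transfer_attack(x, y, w, b, eps=0.5):
    """One FGSM-style step computed on the substitute; the perturbed
    input is then submitted to the black box, hoping it transfers."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad = (p - y) * w               # d(loss)/dx for the substitute
    return x + eps * np.sign(grad)   # ascend the substitute's loss
```

The attacker's only interaction with the target is label queries, which is exactly why these attacks work without any knowledge of the victim's architecture or weights.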



Journal

Journal title: Pattern Recognition Letters

Year: 2021

ISSN: 1872-7344, 0167-8655

DOI: https://doi.org/10.1016/j.patrec.2021.03.033